Advances in neural recording present increasing opportunities to study neural activity in unprecedented detail. Latent variable models (LVMs) are promising tools for analyzing this rich activity across diverse neural systems and behaviors, as LVMs do not depend on known relationships between the activity and external experimental variables. However, progress with LVMs for neuronal population activity is currently impeded by a lack of standardization, resulting in methods being developed and compared in an ad hoc manner. To coordinate these modeling efforts, we introduce a benchmark suite for latent variable modeling of neural population activity. We curate four datasets of neural spiking activity from cognitive, sensory, and motor areas to promote models that apply broadly across these areas. We identify unsupervised evaluation as a common framework for evaluating models across datasets, and apply several baselines that demonstrate the benchmark's diversity. We release this benchmark through EvalAI: http://neurallatents.github.io
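The benchmark's unsupervised evaluation centers on predicting held-out spiking activity, commonly scored in bits per spike: the Poisson log-likelihood of held-out spikes under a model's predicted rates, normalized against a flat mean-rate baseline. Below is a minimal numpy sketch of that metric, assuming rate and spike arrays of shape (trials, time, neurons); treat it as an illustration, not the benchmark's reference implementation.

```python
import numpy as np

def bits_per_spike(pred_rates, spikes):
    """Poisson log-likelihood of held-out spikes under predicted rates,
    normalized against a flat per-neuron mean-rate baseline and expressed
    in bits per spike. Both arrays: (trials, time, neurons)."""
    eps = 1e-9
    ll_model = np.sum(spikes * np.log(pred_rates + eps) - pred_rates)
    mean_rate = spikes.mean(axis=(0, 1), keepdims=True)   # flat baseline
    ll_null = np.sum(spikes * np.log(mean_rate + eps) - mean_rate)
    return (ll_model - ll_null) / (spikes.sum() * np.log(2))

# Example: a model that predicts the true underlying rates typically scores > 0
rates = np.random.default_rng(0).uniform(0.1, 2.0, size=(10, 50, 8))
spikes = np.random.default_rng(1).poisson(rates)
print(bits_per_spike(rates, spikes))
```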
Machine learning (ML) has found broad applicability in quantum information science in topics as diverse as experimental design, state classification, and even studies on quantum foundations. Here, we experimentally realize an approach for defining custom prior distributions that are automatically tuned using ML for use with Bayesian quantum state estimation methods. Previously, researchers have looked to Bayesian quantum state tomography due to its unique advantages like natural uncertainty quantification, the return of reliable estimates under any measurement condition, and minimal mean-squared error. However, practical challenges related to long computation times and conceptual issues concerning how to incorporate prior knowledge most suitably can overshadow these benefits. Using both simulated and experimental measurement results, we demonstrate that ML-defined prior distributions reduce net convergence times and provide a natural way to incorporate both implicit and explicit information directly into the prior distribution. These results constitute a promising path toward practical implementations of Bayesian quantum state tomography.
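As a concrete illustration of the Bayesian estimation pipeline the abstract builds on, the sketch below performs importance sampling from a fixed prior over single-qubit density matrices and returns the posterior mean given Pauli measurement counts. It deliberately uses a generic Ginibre-style prior and invented counts; the paper's contribution, an ML-tuned custom prior, would replace the `random_state` sampler here.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d=2):
    """Sample a density matrix from a generic Ginibre-style prior;
    the paper's ML-defined prior would replace this sampler."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

# Pauli operators; the +1-outcome projector is (I + P) / 2
paulis = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def log_likelihood(rho, counts):
    """Binomial log-likelihood of (n_plus, n_total) counts per Pauli basis."""
    ll = 0.0
    for basis, (n_plus, n_total) in counts.items():
        proj = (np.eye(2) + paulis[basis]) / 2
        p = np.clip(np.trace(proj @ rho).real, 1e-12, 1 - 1e-12)
        ll += n_plus * np.log(p) + (n_total - n_plus) * np.log(1 - p)
    return ll

counts = {"X": (480, 1000), "Y": (510, 1000), "Z": (900, 1000)}  # invented data

# Importance sampling: posterior mean = likelihood-weighted prior samples
samples = [random_state() for _ in range(20000)]
logw = np.array([log_likelihood(rho, counts) for rho in samples])
w = np.exp(logw - logw.max())
w /= w.sum()
rho_est = sum(wi * rho for wi, rho in zip(w, samples))
```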
Prompt Tuning, conditioning on task-specific learned prompt vectors, has emerged as a data-efficient and parameter-efficient method for adapting large pretrained vision-language models to multiple downstream tasks. However, existing approaches usually consider learning prompt vectors for each task independently from scratch, thereby failing to exploit the rich shareable knowledge across different vision-language tasks. In this paper, we propose multitask vision-language prompt tuning (MVLPT), which incorporates cross-task knowledge into prompt tuning for vision-language models. Specifically, (i) we demonstrate the effectiveness of learning a single transferable prompt from multiple source tasks to initialize the prompt for each target task; (ii) we show that many target tasks can benefit each other by sharing prompt vectors and thus can be jointly learned via multitask prompt tuning. We benchmark the proposed MVLPT using three representative prompt tuning methods, namely text prompt tuning, visual prompt tuning, and unified vision-language prompt tuning. Results on 20 vision tasks demonstrate that the proposed approach outperforms all single-task baseline prompt tuning methods, setting the new state of the art on the few-shot ELEVATER benchmarks and cross-task generalization benchmarks. To understand where cross-task knowledge is most effective, we also conduct a large-scale study on task transferability with 20 vision tasks in 400 combinations for each prompt tuning method. It shows that the most performant MVLPT for each prompt tuning method prefers different task combinations, and that many tasks can benefit each other, depending on their visual similarity and label similarity. Code is available at https://github.com/sIncerass/MVLPT.
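For readers unfamiliar with the mechanics, the sketch below shows the core object MVLPT learns: a small matrix of prompt vectors prepended to a frozen encoder's token embeddings, optionally initialized from a prompt learned on source tasks. This is a generic PyTorch illustration, not the authors' implementation; the dimensions and init scale are assumptions.

```python
import torch
import torch.nn as nn

class SharedPrompt(nn.Module):
    """A learnable prompt shared across tasks. The frozen encoder it feeds
    is abstracted away; dimensions and the init scale are assumptions."""
    def __init__(self, prompt_len=16, embed_dim=512, source_prompt=None):
        super().__init__()
        init = (source_prompt.detach().clone() if source_prompt is not None
                else torch.randn(prompt_len, embed_dim) * 0.02)
        self.prompt = nn.Parameter(init)   # the only trainable parameters

    def forward(self, token_embeds):       # (batch, seq, embed_dim)
        p = self.prompt.unsqueeze(0).expand(token_embeds.shape[0], -1, -1)
        return torch.cat([p, token_embeds], dim=1)

# (i) transfer: initialize each target task's prompt from a source prompt
# source = SharedPrompt(); ...train on source tasks...
# target = SharedPrompt(source_prompt=source.prompt)
# (ii) sharing: several target tasks can jointly train one SharedPrompt module.
```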
We study a novel and important communication pattern in large-scale model-parallel deep learning (DL), which we call cross-mesh resharding. This pattern emerges when the two paradigms of model parallelism - intra-operator and inter-operator parallelism - are combined to support large models on large clusters. In cross-mesh resharding, a sharded tensor needs to be sent from a source device mesh to a destination device mesh, on which the tensor may be distributed with the same or different layouts. We formalize this as a many-to-many multicast communication problem, and show that existing approaches either are sub-optimal or do not generalize to different network topologies or tensor layouts, which result from different model architectures and parallelism strategies. We then propose two contributions to address cross-mesh resharding: an efficient broadcast-based communication system, and an "overlapping-friendly" pipeline schedule. On microbenchmarks, our overall system outperforms existing ones by up to 10x across various tensor and mesh layouts. On end-to-end training of two large models, GPT-3 and U-Transformer, we improve throughput by 10% and 50%, respectively.
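The addressing problem underneath cross-mesh resharding can be made concrete with a small sketch: given even 1-D shardings of the same tensor axis on the source and destination meshes, compute which (source shard, destination shard) pairs overlap and over which index ranges, since those overlaps are exactly the data that must move. This is a toy illustration of the communication pattern only; it does not model the paper's broadcast-based system or pipeline schedule.

```python
def shard_bounds(length, num_shards):
    """Even 1-D sharding (assumes num_shards divides length):
    half-open [start, end) index range per shard."""
    step = length // num_shards
    return [(i * step, (i + 1) * step) for i in range(num_shards)]

def resharding_sends(length, src_shards, dst_shards):
    """All (src shard, dst shard, index range) triples with nonempty overlap;
    these are exactly the transfers a resharding of this axis requires."""
    sends = []
    for s, (s0, s1) in enumerate(shard_bounds(length, src_shards)):
        for d, (d0, d1) in enumerate(shard_bounds(length, dst_shards)):
            lo, hi = max(s0, d0), min(s1, d1)
            if lo < hi:
                sends.append((s, d, (lo, hi)))
    return sends

# Resharding a length-12 axis from a 3-way to a 4-way split: src shard 0,
# which holds [0, 4), must feed dst shard 0 ([0, 3)) and dst shard 1 ([3, 4)).
print(resharding_sends(12, 3, 4))
```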
Several chronic lung diseases, such as idiopathic pulmonary fibrosis (IPF), are characterized by abnormal dilation of the airways. Quantification of airway features on computed tomography (CT) can help characterize disease progression. Physics-based airway measurement algorithms have been developed, but have had limited success due to the diversity of airway morphology seen in clinical practice. Supervised learning methods are also not feasible due to the high cost of obtaining precise airway annotations. We propose synthesizing airways by style transfer using perceptual losses to train our model, the Airway Transfer Network (ATN). We compare the ATN model against a state-of-the-art GAN network (simGAN) by a) qualitative assessment and b) evaluating the ability of ATN- and simGAN-based CT airway metrics to predict mortality in 113 IPF patients. ATN proved quicker and easier to train than simGAN. ATN-based airway measurements were also found to be consistently more robust than simGAN-derived airway metrics on IPF CTs. Using perceptual losses to refine synthetic data through a transformation network is a realistic alternative to GAN-based methods for clinical CT analysis in idiopathic pulmonary fibrosis. Our source code can be found at https://github.com/ashkanpakzad/atn, and is compatible with the existing open-source airway analysis framework, AirQuant.
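The perceptual loss the abstract refers to is the standard style-transfer ingredient: compare synthetic and target images in the feature space of a frozen pretrained network rather than in pixel space. A minimal PyTorch sketch follows; the choice of VGG16 and of feature layers is an illustrative assumption, not the ATN configuration.

```python
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """Compare images in the feature space of a frozen VGG16. Inputs are
    assumed 3-channel (grayscale CT would be repeated across channels);
    layer indices 3/8/15 are relu1_2/relu2_2/relu3_3."""
    def __init__(self, layers=(3, 8, 15)):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layers, self.last = vgg, set(layers), max(layers)

    def forward(self, synth, target):
        loss, x, y = 0.0, synth, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layers:
                loss = loss + nn.functional.mse_loss(x, y)
            if i == self.last:
                break
        return loss
```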
Representation learning often plays a critical role in reinforcement learning by managing the curse of dimensionality. A representative class of algorithms exploits a spectral decomposition of the stochastic transition dynamics to construct representations that enjoy strong theoretical properties in an idealized setting. However, current spectral methods suffer from limited applicability because they are constructed for state-only aggregation and are derived from a policy-dependent transition kernel, without considering the issue of exploration. To address these issues, we propose an alternative spectral method, Spectral Decomposition Representation (SPEDER), that extracts a state-action abstraction from the dynamics without inducing spurious dependence on the data collection policy, while also balancing the exploration-versus-exploitation trade-off during learning. A theoretical analysis establishes the sample efficiency of the proposed algorithm in both the online and offline settings. In addition, an experimental investigation demonstrates superior performance over current state-of-the-art algorithms across several benchmarks.
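To make the factorization idea concrete: spectral methods of this family learn features phi(s, a) and mu(s') whose inner product approximates the transition dynamics. The sketch below trains such a factorization with an in-batch InfoNCE-style contrastive loss; it is a stand-in that conveys the flavor, not the SPEDER objective or its exploration machinery.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactoredDynamics(nn.Module):
    """Learn phi(s, a) and mu(s') so that their inner product scores observed
    transitions above in-batch negatives (InfoNCE-style); a stand-in for the
    spectral factorization idea, not the SPEDER objective itself."""
    def __init__(self, s_dim, a_dim, k=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(s_dim + a_dim, 256), nn.ReLU(),
                                 nn.Linear(256, k))
        self.mu = nn.Sequential(nn.Linear(s_dim, 256), nn.ReLU(),
                                nn.Linear(256, k))

    def loss(self, s, a, s_next):
        f = self.phi(torch.cat([s, a], dim=-1))             # (B, k)
        g = self.mu(s_next)                                 # (B, k)
        logits = f @ g.t()                                  # (B, B) similarity
        labels = torch.arange(s.shape[0], device=s.device)  # diagonal = true s'
        return F.cross_entropy(logits, labels)
```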
Cables are ubiquitous in many settings but are prone to self-occlusions and knots, making them difficult to perceive and manipulate. The challenge often increases with cable length: long cables require more complex slack management and strategies to facilitate observability and reachability. In this paper, we focus on autonomously untangling cables up to 3 meters in length using a bilateral robot. We develop new motion primitives to efficiently untangle long cables, along with novel gripper jaws specialized for this task. We present Sliding and Grasping for Tangle Manipulation (SGTM), an algorithm that composes these primitives with RGBD vision to iteratively untangle cables. SGTM achieves untangling success rates of 67% on isolated overhand and figure-eight knots and 50% on more complex configurations. Supplementary material, visualizations, and videos can be found at https://sites.google.com/view/rss-2022-untangling/home.
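At a very high level, SGTM is a perceive-act loop: perceive the cable, apply a sliding primitive when more slack or visibility is needed, otherwise apply a grasping primitive at a detected knot, and repeat. The toy sketch below caricatures that loop with invented stand-in state and primitives; all names and dynamics here are hypothetical, not the paper's perception or motion stack.

```python
# Hypothetical stand-ins: real SGTM uses RGBD perception and bilateral arms.
def knot_detected(obs): return obs["knots"] > 0
def slide_for_slack(obs): obs["slack"] += 1          # expose cable, add slack
def grasp_at_knot(obs):                              # loosen one knot
    if obs["slack"] > 0:
        obs["knots"] -= 1
        obs["slack"] -= 1

def sgtm_style_untangle(obs, max_steps=30):
    """Caricature of the perceive-act loop: alternate sliding (for slack and
    visibility) with grasping moves until no knots remain."""
    for _ in range(max_steps):
        if not knot_detected(obs):
            return True
        if obs["slack"] == 0:
            slide_for_slack(obs)
        else:
            grasp_at_knot(obs)
    return False

print(sgtm_style_untangle({"knots": 2, "slack": 0}))  # -> True
```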
Fine-tuning models on edge devices such as mobile phones would enable privacy-preserving personalization over sensitive data. However, edge training has historically been limited to relatively small models with simple architectures, because training is both memory- and energy-intensive. We present POET, an algorithm to train large neural networks on memory-scarce edge devices. POET jointly optimizes over an integrated search space of rematerialization and paging, two techniques that reduce the memory consumption of backpropagation. Given a memory budget and a run-time constraint, we formulate a mixed-integer linear program (MILP) for optimal training. Our approach enables training significantly larger models on embedded devices while reducing energy consumption and without modifying the mathematical correctness of backpropagation. We demonstrate that it is possible to fine-tune both ResNet-18 and BERT within the memory constraints of Cortex-class embedded devices while outperforming current edge training methods in energy efficiency. POET is an open-source project available at https://github.com/shishirpatil/poet
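POET's key formulation step, choosing per activation between keeping it resident, rematerializing it, and paging it out, can be illustrated with a toy MILP. The sketch below uses the PuLP library with invented energy and memory numbers and collapses POET's per-timestep schedule into a single static choice per activation, so treat it as the flavor of the formulation rather than the paper's program.

```python
import pulp

# Invented per-activation numbers: resident memory (MB), energy to
# rematerialize, and energy to page to/from flash.
acts = ["a1", "a2", "a3", "a4"]
mem = {"a1": 4, "a2": 8, "a3": 8, "a4": 2}
e_remat = {"a1": 3, "a2": 9, "a3": 7, "a4": 1}
e_page = {"a1": 5, "a2": 4, "a3": 4, "a4": 6}
MEM_BUDGET = 12  # MB of RAM available for saved activations

prob = pulp.LpProblem("poet_flavor", pulp.LpMinimize)
keep = pulp.LpVariable.dicts("keep", acts, cat="Binary")
remat = pulp.LpVariable.dicts("remat", acts, cat="Binary")
page = pulp.LpVariable.dicts("page", acts, cat="Binary")

for a in acts:  # each activation is handled in exactly one way
    prob += keep[a] + remat[a] + page[a] == 1
# resident activations must fit in the memory budget
prob += pulp.lpSum(mem[a] * keep[a] for a in acts) <= MEM_BUDGET
# minimize the energy spent recomputing or paging everything else
prob += pulp.lpSum(e_remat[a] * remat[a] + e_page[a] * page[a] for a in acts)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in acts:
    for name, var in (("keep", keep), ("remat", remat), ("page", page)):
        if var[a].value() == 1:
            print(a, "->", name)
```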
The curse of dimensionality in Markov decision processes (MDPs) is often addressed by exploiting low-rank representations, which has motivated recent theoretical study of linear MDPs. However, most approaches either rest on unrealistic assumptions about the normalization of the decomposition or introduce unresolved computational challenges in practice. Instead, we consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning via contrastive estimation. The framework also admits confidence-adjusted index algorithms, enabling an efficient and principled approach to incorporating optimism or pessimism in the face of uncertainty. To the best of our knowledge, this provides the first practical representation learning method for linear MDPs that achieves both strong theoretical guarantees and empirical performance. Theoretically, we prove that the proposed algorithm is sample efficient in both the online and offline settings. Empirically, we demonstrate superior performance over existing model-based and model-free algorithms on several benchmarks.
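The confidence-adjusted index idea can be sketched independently of how the representation is learned: given features phi(s, a) (which the paper obtains via contrastive estimation, much like the factorization sketch above), ridge-regress observed targets on the features and add an elliptical confidence bonus. A one-step numpy illustration under those assumptions, not the paper's full algorithm:

```python
import numpy as np

def optimistic_index(phi, rewards, beta=1.0, lam=1.0):
    """One-step sketch of a confidence-adjusted index: ridge-regress observed
    rewards on features phi(s, a), then add an elliptical bonus
    beta * sqrt(phi^T Lambda^{-1} phi). Flip the bonus's sign for pessimism
    in the offline setting."""
    k = phi.shape[1]
    Lam = lam * np.eye(k) + phi.T @ phi        # regularized feature covariance
    w = np.linalg.solve(Lam, phi.T @ rewards)  # ridge regression weights
    Lam_inv = np.linalg.inv(Lam)
    bonus = beta * np.sqrt(np.einsum("nk,kj,nj->n", phi, Lam_inv, phi))
    return phi @ w + bonus                     # optimistic value per (s, a)

# Toy usage with random features standing in for learned phi(s, a)
rng = np.random.default_rng(0)
phi = rng.normal(size=(100, 8))
print(optimistic_index(phi, phi @ rng.normal(size=8)).shape)  # (100,)
```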
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
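Operationally, BIG-bench tasks are often distributed as JSON files of input/target examples, so a minimal evaluation loop is easy to sketch. The field names below follow the common BIG-bench JSON layout but should be treated as an assumption, and `generate` stands in for any text-in/text-out model:

```python
import json

def exact_match_score(task_json_path, generate):
    """Exact-match accuracy of a text-in/text-out model callable on a
    BIG-bench-style JSON task. The 'examples'/'input'/'target' field names
    follow the common BIG-bench layout but are an assumption here; real
    tasks may use target lists or per-choice target scores instead."""
    with open(task_json_path) as f:
        task = json.load(f)
    hits = sum(generate(ex["input"]).strip() == str(ex["target"]).strip()
               for ex in task["examples"])
    return hits / len(task["examples"])

# Usage with a trivial stand-in "model":
# print(exact_match_score("task.json", lambda prompt: "42"))
```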